
How Data Quality Affects Machine Learning Models for Credit Risk Assessment

Maurino, Andrea

arXiv.org Artificial Intelligence

Machine Learning (ML) models are being increasingly employed for credit risk evaluation, with their effectiveness largely hinging on the quality of the input data. In this paper we investigate the impact of several data quality issues, including missing values, noisy attributes, outliers, and label errors, on the predictive accuracy of machine learning models used in credit risk assessment. Utilizing an open-source dataset, we introduce controlled data corruption using the Pucktrick library to assess the robustness of 10 frequently used models, including Random Forest, SVM, and Logistic Regression. Our experiments show significant differences in model robustness based on the nature and severity of the data degradation. Moreover, the proposed methodology and accompanying tools offer practical support for practitioners seeking to enhance data pipeline robustness, and provide researchers with a flexible framework for further experimentation in data-centric AI contexts.
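The paper's Pucktrick API is not shown here, but the core idea of controlled corruption can be sketched in a few lines: inject a known fraction of missing values into clean feature rows, then measure how model accuracy degrades. The function name and toy data below are illustrative, not taken from the library.

```python
import random

def inject_missing(rows, rate, seed=0):
    """Replace a fraction `rate` of feature values with None,
    emulating one kind of controlled data corruption."""
    rng = random.Random(seed)
    return [[None if rng.random() < rate else v for v in row]
            for row in rows]

# 200 clean 3-feature rows, corrupted at a 30% missing-value rate
data = [[1.0, 2.0, 3.0] for _ in range(200)]
noisy = inject_missing(data, rate=0.3)
missing = sum(v is None for row in noisy for v in row)
observed_rate = missing / (200 * 3)
```

In a robustness study like the one described, the same models would be retrained or re-evaluated at several corruption rates, and the accuracy drop per rate plotted per model.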


A robust methodology for long-term sustainability evaluation of Machine Learning models

Paz-Ruza, Jorge, Gama, João, Alonso-Betanzos, Amparo, Guijarro-Berdiñas, Bertha

arXiv.org Artificial Intelligence

Among the many desirable properties of Artificial Intelligence systems, sustainability and efficiency have become increasingly important in the context of worsening climate change, massive water use in data centres, and the need for simpler, faster models in IoT settings. Consequently, there has been no shortage of attempts to both promote and regulate the sustainability of Machine Learning models; the EU's AI Act indicates that the sustainability of AI - in terms of its environmental and social footprint - should be considered when developing and deploying AI pipelines [1], and manifestos such as UNESCO's highlight sustainability as one of the core principles of the broader Responsible AI paradigm [2]. However, this seemingly consensual agreement on the importance of sustainability and efficiency for real-world AI systems and the social and regulatory efforts heavily contrasts with the practical applicability of such regulations; without looking further, the AI Act itself defines the requirement for sustainability, but does not indicate what metrics and evaluation pipelines should be considered for a robust, reliable, and practically relevant assessment of the environmental impact of a model. We argue that this lack of comprehensiveness in sustainability recommendations for AI systems does not stem from a careless or sloppy construction of the regulations themselves, but rather from an actual absence of suitable evaluation protocols that are formal, model-agnostic, reproducible, and grounded in real-life usage protocols for the ML lifecycle. The authors of this preprint are aware of a single regulatory standard for measuring AI sustainability, namely UNE 0086 [3], which limits evaluation to the epoch-batch training paradigm of supervised learning systems, rendering it useless for any task or type of learning that deviates from that standard.
Although many researchers and companies have made it a habit to report efficiency figures and comparisons (e.g., in terms of emitted CO2) ...
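The standard way such CO2 figures are estimated is simple arithmetic: measured (or rated) power draw times training time gives energy in kWh, which is multiplied by the carbon intensity of the local grid. The numbers below are illustrative placeholders, not values from the paper.

```python
def training_emissions_kg(power_watts, hours, grid_kg_per_kwh):
    """Estimate CO2-equivalent emissions of a training run:
    energy (kWh) = power (kW) * time (h); emissions = energy * grid intensity."""
    energy_kwh = (power_watts / 1000.0) * hours
    return energy_kwh * grid_kg_per_kwh

# e.g. a 300 W GPU running for 10 hours on a 0.4 kgCO2/kWh grid:
# 3 kWh of energy, so 1.2 kg of CO2-equivalent emissions
emissions = training_emissions_kg(300, 10, 0.4)
```

The paper's point is precisely that this back-of-the-envelope style of reporting, while common, is not yet backed by a formal, model-agnostic evaluation protocol.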


Irony Detection in Urdu Text: A Comparative Study Using Machine Learning Models and Large Language Models

Ahmad, Fiaz, Hussain, Nisar, Qasim, Amna, Hafeez, Momina, Usman, Muhammad, Sidorov, Grigori, Gelbukh, Alexander

arXiv.org Artificial Intelligence

Irony detection is a challenging task in Natural Language Processing, particularly when dealing with languages that differ in syntax and cultural context. In this work, we aim to detect irony in Urdu by translating an English Ironic Corpus into the Urdu language. We evaluate ten state-of-the-art machine learning algorithms using GloVe and Word2Vec embeddings, and compare their performance with classical methods. Additionally, we fine-tune advanced transformer-based models, including BERT, RoBERTa, LLaMA 2 (7B), LLaMA 3 (8B), and Mistral, to assess the effectiveness of large-scale models in irony detection. Among machine learning models, Gradient Boosting achieved the best performance with an F1-score of 89.18%. Among transformer-based models, LLaMA 3 (8B) achieved the highest performance with an F1-score of 94.61%. These results demonstrate that combining transliteration techniques with modern NLP models enables robust irony detection in Urdu, a historically low-resource language.
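The classical pipeline described, GloVe/Word2Vec embeddings feeding ML classifiers such as Gradient Boosting, typically featurizes a sentence by averaging the embeddings of its tokens. A minimal sketch with a toy two-dimensional vector table (real GloVe vectors have hundreds of dimensions; the words and values below are invented for illustration):

```python
# toy word vectors standing in for pre-trained GloVe/Word2Vec embeddings
VECS = {
    "great":    [1.0, 0.0],
    "sure":     [0.9, 0.1],
    "terrible": [0.0, 1.0],
    "awful":    [0.1, 0.9],
}

def sentence_vec(tokens, dim=2):
    """Average the embeddings of known tokens -- the standard
    featurization used before feeding classical ML classifiers."""
    known = [VECS[t] for t in tokens if t in VECS]
    if not known:
        return [0.0] * dim
    return [sum(col) / len(known) for col in zip(*known)]

v = sentence_vec(["great", "terrible"])
```

The resulting fixed-length vectors can then be fed to any of the ten classifiers compared in the paper.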


When Secure Isn't: Assessing the Security of Machine Learning Model Sharing

Digregorio, Gabriele, Di Gennaro, Marco, Zanero, Stefano, Longari, Stefano, Carminati, Michele

arXiv.org Artificial Intelligence

The rise of model-sharing through frameworks and dedicated hubs makes Machine Learning significantly more accessible. Despite their benefits, these tools expose users to underexplored security risks, while security awareness remains limited among both practitioners and developers. To enable a more security-conscious culture in Machine Learning model sharing, in this paper we evaluate the security posture of frameworks and hubs, assess whether security-oriented mechanisms offer real protection, and survey how users perceive the security narratives surrounding model sharing. Our evaluation shows that most frameworks and hubs address security risks partially at best, often by shifting responsibility to the user. More concerningly, our analysis of frameworks advertising security-oriented settings and complete model sharing uncovered six 0-day vulnerabilities enabling arbitrary code execution. Through this analysis, we debunk the misconceptions that the model-sharing problem is largely solved and that its security can be guaranteed by the file format used for sharing. As expected, our survey shows that the surrounding security narrative leads users to consider security-oriented settings as trustworthy, despite the weaknesses shown in this work. From this, we derive takeaways and suggestions to strengthen the security of model-sharing ecosystems.
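The class of vulnerability behind "loading a model file executes code" is easy to demonstrate with Python's pickle protocol, which many model formats wrap: unpickling invokes an object's `__reduce__` hook, which may name an arbitrary callable. The sketch below is a benign illustration of the mechanism (a harmless `eval` stands in for malicious code), not one of the paper's six 0-days.

```python
import pickle

class Payload:
    """Why 'a model file is just data' is a misconception:
    unpickling calls __reduce__, which can run arbitrary callables."""
    def __reduce__(self):
        # benign stand-in for attacker-controlled code
        return (eval, ("40 + 2",))

blob = pickle.dumps(Payload())   # what a 'model file' could contain
obj = pickle.loads(blob)         # code runs here, at load time -> 42
```

This is also why the paper argues that security cannot be guaranteed by the file format alone: a wrapper around an unsafe serializer inherits the problem.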


Watermarking and Anomaly Detection in Machine Learning Models for LORA RF Fingerprinting

Mahajan, Aarushi, Burleson, Wayne

arXiv.org Artificial Intelligence

Radio frequency fingerprint identification (RFFI) distinguishes wireless devices by the small variations in their analog circuits, avoiding heavy cryptographic authentication. While deep learning on spectrograms improves accuracy, models remain vulnerable to copying, tampering, and evasion. We present a stronger RFFI system combining watermarking for ownership proof and anomaly detection for spotting suspicious inputs. Using a ResNet-34 on log-Mel spectrograms, we embed three watermarks: a simple trigger, an adversarially trained trigger robust to noise and filtering, and a hidden gradient/weight signature. A convolutional Variational Autoencoder (VAE) with Kullback-Leibler (KL) warm-up and free-bits flags off-distribution queries. On the LoRa dataset, our system achieves 94.6% accuracy, 98% watermark success, and 0.94 AUROC, offering verifiable, tamper-resistant authentication.
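Trigger-based watermark verification, the first of the three watermarks mentioned, works by checking whether a suspect model returns pre-agreed labels on secret trigger inputs far above chance. The sketch below uses a string-based toy model; the function names, threshold, and labels are illustrative, not the paper's.

```python
def watermark_verified(model, triggers, expected_labels, threshold=0.9):
    """Ownership check: a watermarked model should reproduce the
    pre-agreed labels on the secret trigger set well above chance."""
    hits = sum(model(x) == y for x, y in zip(triggers, expected_labels))
    return hits / len(triggers) >= threshold

# toy stand-in for a (possibly stolen) watermarked RFFI classifier
suspect = lambda x: "device_7" if x.startswith("trigger") else "device_0"
triggers = [f"trigger_{i}" for i in range(10)]
ok = watermark_verified(suspect, triggers, ["device_7"] * 10)
```

In the real system the triggers are crafted spectrograms, and the adversarially trained variant keeps this check reliable even after the copy is perturbed with noise or filtering.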


Our Cars Can Talk: How IoT Brings AI to Vehicles

Agrawal, Amod Kant

arXiv.org Artificial Intelligence

Bringing AI to vehicles and enabling them as sensing platforms is key to transforming maintenance from reactive to proactive. Now is the time to integrate AI copilots that speak both languages: machine and driver. This article offers a conceptual and technical perspective intended to spark interdisciplinary dialogue and guide future research and development in intelligent vehicle systems, predictive maintenance, and AI-powered user interaction. Vehicle maintenance remains largely reactive to this day, often triggered by the dreaded check engine light, sometimes at the worst possible time: in the middle of a busy week, or right before a road trip. However, today's vehicles are equipped with a dense network of sensors that can monitor nearly every aspect of performance in real time.


Counterfactual optimization for fault prevention in complex wind energy systems

Carrizosa, Emilio, Fischetti, Martina, Haaker, Roshell, Morales, Juan Miguel

arXiv.org Artificial Intelligence

Machine Learning models are increasingly used in businesses to detect faults and anomalies in complex systems. In this work, we take this approach a step further: beyond merely detecting anomalies, we aim to identify the optimal control strategy that restores the system to a safe state with minimal disruption. We frame this challenge as a counterfactual problem: given a Machine Learning model that classifies system states as either "good" or "anomalous," our goal is to determine the minimal adjustment to the system's control variables (i.e., its current status) that is necessary to return it to the "good" state. To achieve this, we leverage a mathematical model that finds the optimal counterfactual solution while respecting system-specific constraints. Notably, most counterfactual analysis in the literature focuses on individual cases where a person seeks to alter their status relative to a decision made by a classifier - such as for loan approval or medical diagnosis. Our work addresses a fundamentally different challenge: optimizing counterfactuals for a complex energy system, specifically an offshore wind turbine oil-type transformer. This application not only advances counterfactual optimization in a new domain but also opens avenues for broader research in this area. Our tests on real-world data provided by our industrial partner show that our methodology easily adapts to user preferences and brings savings on the order of €3 million per year in a typical farm.
Introduction
Energy systems are becoming increasingly complex, making it more challenging - and more critical - to detect faults early and develop strategies to mitigate them. In this context, Machine Learning (ML) techniques have become an industry standard for early fault detection [16]. Energy companies can monitor various sensor readings from the turbines and apply ML methods to identify potential issues with components.
In this paper, we define a fault (or faulty state) as a condition where a component is in an unsafe status, while an anomaly refers to any irregularity that is not necessarily dangerous. Note that faults are a subset of anomalies. When a fault is detected, a controller is immediately activated to prevent severe damage to the turbine. Machine Learning models can detect anomalies in advance, providing companies with a window of time to intervene before faults occur.
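The counterfactual objective described, find the smallest change to a control variable that flips the classifier back to "good", can be sketched as a search that expands outward from the current setting. The paper solves this with a constrained mathematical optimization model; the greedy one-dimensional search and the temperature threshold below are simplified stand-ins.

```python
def minimal_counterfactual(x, is_good, step=0.1, max_steps=100):
    """Search outward from the current control setting x for the
    closest value the classifier labels 'good' -- the
    minimal-disruption adjustment to restore a safe state."""
    if is_good(x):
        return x
    for k in range(1, max_steps + 1):
        for cand in (x + k * step, x - k * step):
            if is_good(cand):
                return cand
    return None  # no safe setting within the search radius

# toy classifier: the transformer state is 'good' below 80.0 degrees
good = lambda temp: temp < 80.0
cf = minimal_counterfactual(82.0, good)  # smallest shift back under 80.0
```

The real formulation additionally enforces system-specific constraints (actuator limits, couplings between variables) and weights the adjustment cost according to user preferences.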


Machine Learning Models Have a Supply Chain Problem

Meiklejohn, Sarah, Blauzvern, Hayden, Maruseac, Mihai, Schrock, Spencer, Simon, Laurent, Shumailov, Ilia

arXiv.org Artificial Intelligence

Powerful machine learning (ML) models are now readily available online, which creates exciting possibilities for users who lack the deep technical expertise or substantial computing resources needed to develop them. On the other hand, this type of open ecosystem comes with many risks. In this paper, we argue that the current ecosystem for open ML models contains significant supply-chain risks, some of which have been exploited already in real attacks. These include an attacker replacing a model with something malicious (e.g., malware), or a model being trained using a vulnerable version of a framework or on restricted or poisoned data. We then explore how Sigstore, a solution designed to bring transparency to open-source software supply chains, can be used to bring transparency to open ML models, in terms of enabling model publishers to sign their models and prove properties about the datasets they use.
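The integrity half of the proposal, a publisher signs the model so consumers can detect a malicious swap, reduces to signing and verifying a digest of the model bytes. Sigstore itself uses keyless, certificate-based signatures with a transparency log; the HMAC sketch below is a deliberately simplified stand-in for that mechanism, with invented key and payload values.

```python
import hashlib
import hmac

def sign_model(model_bytes, key):
    """Simplified stand-in for publisher signing: an HMAC-SHA256
    digest over the model file (Sigstore uses keyless certificate-
    based signatures plus a transparency log instead)."""
    return hmac.new(key, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes, key, signature):
    """Reject any model whose bytes no longer match the signature."""
    return hmac.compare_digest(sign_model(model_bytes, key), signature)

weights = b"\x00\x01fake-model-weights"
sig = sign_model(weights, b"publisher-key")
ok = verify_model(weights, b"publisher-key", sig)          # intact
tampered = verify_model(weights + b"!", b"publisher-key", sig)  # swapped
```

This catches the model-replacement attack; the dataset-provenance claims discussed in the paper require the richer attestations that Sigstore-style transparency enables.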


A Data-Centric Perspective on Evaluating Machine Learning Models for Tabular Data

Neural Information Processing Systems

Tabular data is prevalent in real-world machine learning applications, and new models for supervised learning of tabular data are frequently proposed. Comparative studies assessing performance differences typically have model-centered evaluation setups with overly standardized data preprocessing. This limits the external validity of these studies, as in real-world modeling pipelines, models are typically applied after dataset-specific preprocessing and feature engineering. We address this gap by proposing a data-centric evaluation framework. We select 10 relevant datasets from Kaggle competitions and implement expert-level preprocessing pipelines for each dataset.
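The data-centric setup described amounts to pairing each dataset with its own expert preprocessing instead of one standardized step. A minimal sketch of such a registry (the dataset names and transformations below are hypothetical, not the paper's Kaggle selections):

```python
import math

# dataset-specific expert preprocessing, one pipeline per dataset
def log_scale_prices(rows):
    """Hypothetical housing pipeline: log-transform a skewed target."""
    return [{**r, "price": math.log1p(r["price"])} for r in rows]

def drop_leaky_id(rows):
    """Hypothetical churn pipeline: remove an identifier that leaks labels."""
    return [{k: v for k, v in r.items() if k != "id"} for r in rows]

PIPELINES = {"housing": log_scale_prices, "churn": drop_leaky_id}

def preprocess(dataset_name, rows):
    """Apply the dataset-specific pipeline, not a standardized one."""
    return PIPELINES[dataset_name](rows)

out = preprocess("churn", [{"id": 1, "tenure": 12}])
```

Model comparisons then run on top of these per-dataset pipelines, which is what gives the evaluation its external validity.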


PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining

Neural Information Processing Systems

We present PANORAMIA, a privacy leakage measurement framework for machine learning models that relies on membership inference attacks using generated data as non-members. By relying on generated non-member data, PANORAMIA eliminates the common dependency of privacy measurement tools on in-distribution non-member data. As a result, PANORAMIA does not modify the model, training data, or training process, and only requires access to a subset of the training data. We evaluate PANORAMIA on ML models for image and tabular data classification, as well as on large-scale language models.
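A simple membership inference attack of the kind PANORAMIA builds on is the loss-threshold attack: examples the model fits unusually well are guessed to be training members, with generated data supplying the non-member side. The sketch below is a generic illustration with invented loss values, not the paper's attack model.

```python
def membership_attack(losses, threshold):
    """Loss-threshold MIA: guess 'member' for examples the model
    fits unusually well (loss below the threshold)."""
    return [loss < threshold for loss in losses]

# members tend to have lower loss than the generated non-members
member_losses = [0.05, 0.10, 0.20]
nonmember_losses = [0.90, 1.20, 0.70]  # synthetic data as non-members
truth = [True] * 3 + [False] * 3

guesses = membership_attack(member_losses + nonmember_losses, threshold=0.5)
accuracy = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
```

The attack's advantage over random guessing is then turned into a privacy leakage measurement, which is what lets PANORAMIA audit a model without retraining it.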